On the orthogonality of will and problem-solving: AFAICT, you agree with EY/the LW consensus that intelligence is orthogonal to rationality, and both are orthogonal to what goals one has. One can be any of intelligent (able to form correct beliefs and solve problems), rational (forming correct beliefs and solving problems to the best of one's ability), or moral (having some particular set of goals) without necessarily being any of the others. Low intelligence makes rationality less useful, and low rationality makes intelligence less useful/apparent, but each can be present without the others. As I said, as far as I know there's no disagreement here on any of this.
In a couple of places you seem to be confusing narrow AI with general AI. For instance,
Why would optimizing compiler that can optimize it’s ability to optimize, suddenly emerge will? It could foom all right, but it wouldn’t get out and start touching itself from outside;
An optimizing compiler is a narrow AI; it only "knows" how to deal with code, only cares about transforming its input program, and has no ability to learn how to change its own hardware. An AGI can see that changing its hardware or environment would help it achieve its goals and figure out how to do so. Also, nitpick: an optimizing compiler run on its own code can only increase the speed with which it optimizes. It can't find more effective ways to optimize code than it already "knows".
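To make the nitpick concrete, here is a minimal Python sketch (rules and expressions invented purely for illustration) of an optimizer with a fixed rule set. Running it over its own code could at most make it faster; it never acquires rules it didn't already have.

```python
# A toy "optimizing compiler" with a fixed rule set. The point: no matter how
# often it is applied (even to itself), the rule set never grows on its own.

# Each rule maps a recognizable expression pattern to a cheaper equivalent.
RULES = {
    ("mul", "x", 2): ("shl", "x", 1),  # strength reduction: x*2 -> x<<1
    ("add", "x", 0): ("id", "x"),      # identity elimination: x+0 -> x
    ("mul", "x", 1): ("id", "x"),      # identity elimination: x*1 -> x
}

def optimize(expr):
    """Apply the fixed rules to a single expression tuple, if any match."""
    return RULES.get(expr, expr)

def optimize_program(program):
    """Optimize a program (a list of expressions) in one pass."""
    return [optimize(e) for e in program]

if __name__ == "__main__":
    program = [("mul", "x", 2), ("add", "x", 0), ("sub", "x", 3)]
    print(optimize_program(program))
    # [('shl', 'x', 1), ('id', 'x'), ('sub', 'x', 3)]
    #
    # Feeding the optimizer's own code through itself could make optimize()
    # run faster, but RULES stays the same: it will never discover, say,
    # ("mul", "x", 4) -> ("shl", "x", 2) unless someone adds that rule.
```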
it seems likely that some automated software development tool would foom, reaching close to absolute maximum optimality on certain hardware.
If a program can perfectly reach its goals/get max utility in the easiest way possible without taking over the world, then it won't take over the world. The AIs that will have large effects on the world, for good or ill, will be the ones that can think about all domains and have utility functions defined over the whole state of the world.
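A rough sketch of that distinction, with made-up numbers: a maximizer whose utility already saturates on a cheap, narrow action has no reason to pick a drastic one, whereas a utility function defined over the whole world-state can keep rewarding drastic actions.

```python
# Minimal illustration (all numbers invented): (action, utility achieved, cost).
ACTIONS = [
    ("optimize the given program", 1.0, 0.1),
    ("seize more hardware",        1.0, 9.0),  # no extra utility for a narrow goal
    ("do nothing",                 0.0, 0.0),
]

def choose(actions):
    """Pick the highest-utility action, breaking ties by lowest cost."""
    return max(actions, key=lambda a: (a[1], -a[2]))

print(choose(ACTIONS))  # ('optimize the given program', 1.0, 0.1)

# If instead the utility function ranged over the whole state of the world
# (e.g. "seize more hardware" yielded 2.0), the drastic action would win.
```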
Also, these all seem like meta-points, or meta-arguments. For instance, you say:
The arguments for are pretty bad upon closer scrutiny,
but don’t post the detailed results of your scrutiny or the refutations of those arguments. You say:
One can make equally good arguments for the opposite point of view
but don’t post them.
This suggests that you have another post or big comment out there where you wrote this stuff, or that you have a lot of relevant thoughts on the subject you haven’t written up here yet. Could you add the other material and/or link to it? It’ll make the discussion easier if all your ideas are in one place.
The arguments for are pretty bad upon closer scrutiny,
but don’t post the detailed results of your scrutiny or the refutations of those arguments.
Playing the burden of proof game won't help you at all. If you want to convince people of AI risk, then you have to improve your arguments when they tell you that your current ones are unconvincing. It is not the obligation of those who are not convinced to refute your bad arguments (which might not even be possible if they are vague enough).
It’s true that he has no responsibility per se to tell us what’s wrong with our arguments, but I can still ask him without claiming he has the burden of proof. I’m willing to accept the burden of proof, and “That’s not enough evidence, I’m not convinced” is a valid response, but if he has any specific reasons beyond that I want to know them.
In general I agree with this, but he specifically mentions equally good arguments for the opposing view without describing or linking to them. If I say "that's a bad argument; you'll have to do better to convince me," that's one thing, but it's quite another to say "that's a bad argument because of what X said" and then not say what X said.
Playing the burden of proof game won't help you at all. If you want to convince people of AI risk, then you have to improve your arguments when they tell you that your current ones are unconvincing.
Indeed. I’m wondering how many people complaining of the tone of this post have previously declared Crocker’s Rules.